About the project


Hello, I am Nea. I am a third-year PhD student in the Doctoral Programme in Molecular Medicine. I recently started learning to code with Python to automate some data analysis workflows. I also got tired of only being able to use licensed software such as Prism at the university to make graphs, and I was not happy with the limited options in those programs. I attended this course to also learn R, which is more widely used in biomedicine than Python, and I wish to develop bioinformatics skills so that I can get more out of my data.

Chapter 2: Regression and model validation

In this exercise I analyze preprocessed data.

Original data and preprocessing

The data come from a learning approaches study carried out on an Introduction to Statistics course in Helsinki in 2014, and they are part of a multinational study. Approaches are defined following the ASSIST guidelines and divided into three categories: Deep, Surface and Strategic. Students participated by filling in questionnaires relating to these three approaches, answering on a scale of 1-5 (least agreeing to most agreeing). In addition, the data contain a measure of each student's attitude and of achievement based on exam points.

In preprocessing, the questions were divided into Deep, Surface and Strategic groups and the mean of each group was calculated for each student. The analysis dataset also includes gender, age, attitude and exam points. Students who scored 0 points in the exam were excluded.
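A minimal sketch of this preprocessing step, assuming raw is the full questionnaire data and deep_q, stra_q and surf_q are character vectors naming the question columns in each group (these object names are illustrative, not the actual preprocessing script):

#Hedged sketch only: 'raw', 'deep_q', 'stra_q' and 'surf_q' are assumed objects
raw$deep <- rowMeans(raw[, deep_q])
raw$stra <- rowMeans(raw[, stra_q])
raw$surf <- rowMeans(raw[, surf_q])
analysis <- subset(raw, points > 0,
                   select = c("gender", "age", "attitude", "points", "deep", "stra", "surf"))
write.csv(analysis, "data/analysis_dataset.csv", row.names = FALSE)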

Read and explore data

data <- read.csv('data/analysis_dataset.csv')
head(data)
##   gender age attitude points     deep  stra     surf
## 1      F  53      3.7     25 3.583333 3.375 2.583333
## 2      M  55      3.1     12 2.916667 2.750 3.166667
## 3      F  49      2.5     24 3.500000 3.625 2.250000
## 4      M  53      3.5     10 3.500000 3.125 2.250000
## 5      M  49      3.7     22 3.666667 3.625 2.833333
## 6      F  38      3.8     21 4.750000 3.625 2.416667
dim(data)
## [1] 166   7
str(data)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...

The data contain 7 variables (columns) and 166 observations (students, rows). All variables are numeric or integer except gender, which is a factor with two levels (F and M).

Overview of the data

summary(data)
##  gender       age           attitude         points           deep      
##  F:110   Min.   :17.00   Min.   :1.400   Min.   : 7.00   Min.   :1.583  
##  M: 56   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:19.00   1st Qu.:3.333  
##          Median :22.00   Median :3.200   Median :23.00   Median :3.667  
##          Mean   :25.51   Mean   :3.143   Mean   :22.72   Mean   :3.680  
##          3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:27.75   3rd Qu.:4.083  
##          Max.   :55.00   Max.   :5.000   Max.   :33.00   Max.   :4.917  
##       stra            surf      
##  Min.   :1.250   Min.   :1.583  
##  1st Qu.:2.625   1st Qu.:2.417  
##  Median :3.188   Median :2.833  
##  Mean   :3.121   Mean   :2.787  
##  3rd Qu.:3.625   3rd Qu.:3.167  
##  Max.   :5.000   Max.   :4.333
library(ggplot2)
library(GGally)
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
p <- ggpairs(data, title = "Correlation summary graphs", mapping = aes(col = gender, alpha = 0.4), lower = list(combo = wrap('facethist', bins = 20)))

p

The data contain more females than males, but the other variables do not appear to depend on gender. Participants are aged 17-55. Ignoring gender, the strongest (positive) correlation is between points and attitude (0.437) and the weakest correlation, which is negative, is between points and the deep learning approach (-0.010). This could suggest that exams do not measure the level of deep learning. Of the three learning approaches, points correlate best with the strategic approach (positive correlation), suggesting that a focus on planning and scheduling could improve exam points.
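The specific correlations quoted above can be checked directly from the data:

#Overall (gender-ignoring) correlations of points with attitude, deep and stra
cor(data$points, data$attitude)
cor(data$points, data$deep)
cor(data$points, data$stra)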

Multiple linear regression fitting

To explain the target variable points, I selected the 3 variables with the highest positive or negative correlation with it (attitude, stra, surf) and fitted a multiple linear regression.

linear_3_variables <- lm(points ~ attitude + stra + surf, data = data)
summary(linear_3_variables)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

Attitude is the only variable that significantly explains points.

Single linear regression (attitude)

linear_attitude <- lm(points ~ attitude, data = data)
summary(linear_attitude)
## 
## Call:
## lm(formula = points ~ attitude, data = data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.6372     1.8303   6.358 1.95e-09 ***
## attitude      3.5255     0.5674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

After removing the non-significant variables, the adjusted R2 decreases marginally, meaning that slightly less of the variance in points is explained by attitude alone. However, the F-statistic p-value also decreases for the model with attitude alone, suggesting that the simple regression model fits the data better. Both models give relatively high residual standard errors compared to the 1st and 3rd residual quartiles.

Diagnostic plots for single regression

Residuals vs Fitted values, Normal Q-Q and Residuals vs Leverage diagnostic plots are produced below.

par(mfrow = c(1, 3))
plot(linear_attitude, which = c(1, 2, 5))

Residuals vs. Fitted values: the variance of the residual errors looks reasonably constant; the graph should not show any clear pattern, and it does not.
Q-Q plot: the normality assumption of the errors looks reasonable.
Residuals vs. Leverage: indicates the impact of individual observations on the fitted model; no single observation has an unreasonably high influence here.

Testing stepwise multiple regression

I realized it would be slow to go through all possible variable combinations manually, so I looked for an automated approach to help find the best model. Here I test the ols_step_both_p function from the olsrr package. It selects variables for the model based on their p-values, and the thresholds for including or excluding a variable can be adjusted manually; a sketch of that is shown after the stepwise output below (more information here: https://www.guru99.com/r-simple-multiple-linear-regression.html).

library(olsrr)
## 
## Attaching package: 'olsrr'
## The following object is masked from 'package:datasets':
## 
##     rivers
fit <- lm(points ~ factor(gender) + age + attitude + deep + stra + surf, data = data) 
best <- ols_step_both_p(fit)
## Stepwise Selection Method   
## ---------------------------
## 
## Candidate Terms: 
## 
## 1. factor(gender) 
## 2. age 
## 3. attitude 
## 4. deep 
## 5. stra 
## 6. surf 
## 
## We are selecting variables based on p value...
## 
## Variables Entered/Removed: 
## 
## - attitude added 
## - stra added 
## - age added 
## 
## No more variables to be added/removed.
## 
## 
## Final Model Output 
## ------------------
## 
##                         Model Summary                          
## --------------------------------------------------------------
## R                       0.467       RMSE                5.260 
## R-Squared               0.218       Coef. Var          23.156 
## Adj. R-Squared          0.204       MSE                27.671 
## Pred R-Squared          0.176       MAE                 4.120 
## --------------------------------------------------------------
##  RMSE: Root Mean Square Error 
##  MSE: Mean Square Error 
##  MAE: Mean Absolute Error 
## 
##                                 ANOVA                                 
## ---------------------------------------------------------------------
##                 Sum of                                               
##                Squares         DF    Mean Square      F         Sig. 
## ---------------------------------------------------------------------
## Regression    1250.931          3        416.977    15.069    0.0000 
## Residual      4482.762        162         27.671                     
## Total         5733.693        165                                    
## ---------------------------------------------------------------------
## 
##                                   Parameter Estimates                                    
## ----------------------------------------------------------------------------------------
##       model      Beta    Std. Error    Std. Beta      t        Sig      lower     upper 
## ----------------------------------------------------------------------------------------
## (Intercept)    10.895         2.648                  4.114    0.000     5.666    16.125 
##    attitude     3.481         0.562        0.431     6.191    0.000     2.371     4.591 
##        stra     1.004         0.534        0.131     1.878    0.062    -0.051     2.059 
##         age    -0.088         0.053       -0.116    -1.664    0.098    -0.193     0.016 
## ----------------------------------------------------------------------------------------
best
## 
##                              Stepwise Selection Summary                               
## -------------------------------------------------------------------------------------
##                      Added/                   Adj.                                       
## Step    Variable    Removed     R-Square    R-Square     C(p)        AIC        RMSE     
## -------------------------------------------------------------------------------------
##    1    attitude    addition       0.191       0.186    5.3960    1029.9875    5.3197    
##    2      stra      addition       0.205       0.195    4.4470    1029.0379    5.2888    
##    3      age       addition       0.218       0.204    3.6840    1028.2247    5.2604    
## -------------------------------------------------------------------------------------
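The inclusion and exclusion thresholds mentioned earlier could be tightened, for example, as below. The argument names pent and prem are an assumption based on the olsrr version used here; newer olsrr releases renamed them to p_enter and p_remove.

#Stricter thresholds (argument names assumed): enter at p < 0.05, remove at p > 0.10
strict <- ols_step_both_p(fit, pent = 0.05, prem = 0.10)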

Best fit diagnostic plots

best_fit <- lm(points ~ attitude + stra + age, data = data)
summary(best_fit)
## 
## Call:
## lm(formula = points ~ attitude + stra + age, data = data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -18.1149  -3.2003   0.3303   3.4129  10.7599 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 10.89543    2.64834   4.114 6.17e-05 ***
## attitude     3.48077    0.56220   6.191 4.72e-09 ***
## stra         1.00371    0.53434   1.878   0.0621 .  
## age         -0.08822    0.05302  -1.664   0.0981 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.26 on 162 degrees of freedom
## Multiple R-squared:  0.2182, Adjusted R-squared:  0.2037 
## F-statistic: 15.07 on 3 and 162 DF,  p-value: 1.07e-08
par(mfrow = c(1, 3))
plot(best_fit, which = c(1, 2, 5))

Comparison of different models

ANOVA to check whether one model is significantly better than another. It looks like the best_fit model could be slightly better than the attitude + stra + surf model, although the difference is significant only at the 0.1 level.

anova <- anova(linear_3_variables, linear_attitude, best_fit)
anova
## Analysis of Variance Table
## 
## Model 1: points ~ attitude + stra + surf
## Model 2: points ~ attitude
## Model 3: points ~ attitude + stra + age
##   Res.Df    RSS Df Sum of Sq      F  Pr(>F)  
## 1    162 4544.4                              
## 2    164 4641.1 -2   -96.743 1.7244 0.18154  
## 3    162 4482.8  2   158.355 2.8226 0.06238 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Parameters from all three models in one table:

library(stargazer)
## 
## Please cite as:
##  Hlavac, Marek (2018). stargazer: Well-Formatted Regression and Summary Statistics Tables.
##  R package version 5.2.2. https://CRAN.R-project.org/package=stargazer
table <- stargazer(linear_3_variables, linear_attitude, best_fit, title='Results', type="html", align = TRUE)
Results

                     Dependent variable: points
                     (1)                       (2)                       (3)
attitude             3.395***                  3.525***                  3.481***
                     (0.574)                   (0.567)                   (0.562)
stra                 0.853                                               1.004*
                     (0.542)                                             (0.534)
surf                 -0.586
                     (0.801)
age                                                                      -0.088*
                                                                         (0.053)
Constant             11.017***                 11.637***                 10.895***
                     (3.684)                   (1.830)                   (2.648)
Observations         166                       166                       166
R2                   0.207                     0.191                     0.218
Adjusted R2          0.193                     0.186                     0.204
Residual Std. Error  5.296 (df = 162)          5.320 (df = 164)          5.260 (df = 162)
F Statistic          14.132*** (df = 3; 162)   38.608*** (df = 1; 164)   15.069*** (df = 3; 162)
Note: *p<0.1; **p<0.05; ***p<0.01

Multicollinearity detection

I use corrplot to visualize the correlations between the different variables easily.

library(corrplot)
## corrplot 0.84 loaded
#Select only numeric columns
drops <- c("gender")
numeric_df <- data[ , !(names(data) %in% drops)]
head(numeric_df)
##   age attitude points     deep  stra     surf
## 1  53      3.7     25 3.583333 3.375 2.583333
## 2  55      3.1     12 2.916667 2.750 3.166667
## 3  49      2.5     24 3.500000 3.625 2.250000
## 4  53      3.5     10 3.500000 3.125 2.250000
## 5  49      3.7     22 3.666667 3.625 2.833333
## 6  38      3.8     21 4.750000 3.625 2.416667
cor1 = cor(numeric_df)
corrplot.mixed(cor1)

I use the mctest and ppcor packages to test for multicollinearity. Multicollinearity was detected between surf and deep. However, deep was not included in any of the regression models, so this multicollinearity does not affect them.

library(mctest)
library(ppcor)
## Loading required package: MASS
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:olsrr':
## 
##     cement
omcdiag(numeric_df[,c(1:2,4:6)],numeric_df$points)
## 
## Call:
## omcdiag(x = numeric_df[, c(1:2, 4:6)], y = numeric_df$points)
## 
## 
## Overall Multicollinearity Diagnostics
## 
##                        MC Results detection
## Determinant |X'X|:         0.8167         0
## Farrar Chi-Square:        32.8984         1
## Red Indicator:             0.1478         0
## Sum of Lambda Inverse:     5.4072         0
## Theil's Method:           -0.5624         0
## Condition Number:         35.1278         1
## 
## 1 --> COLLINEARITY is detected by the test 
## 0 --> COLLINEARITY is not detected by the test
imcdiag(numeric_df[,c(1:2,4:6)],numeric_df$points)
## 
## Call:
## imcdiag(x = numeric_df[, c(1:2, 4:6)], y = numeric_df$points)
## 
## 
## All Individual Multicollinearity Diagnostics Result
## 
##             VIF    TOL     Wi     Fi Leamer   CVIF Klein   IND1   IND2
## age      1.0280 0.9728 1.1251 1.5095 0.9863 1.0420     0 0.0242 0.3754
## attitude 1.0363 0.9650 1.4596 1.9583 0.9823 1.0505     0 0.0240 0.4832
## deep     1.1239 0.8898 4.9871 6.6908 0.9433 1.1393     0 0.0221 1.5220
## stra     1.0371 0.9642 1.4925 2.0023 0.9820 1.0513     0 0.0240 0.4936
## surf     1.1820 0.8460 7.3251 9.8275 0.9198 1.1982     0 0.0210 2.1257
## 
## 1 --> COLLINEARITY is detected by the test 
## 0 --> COLLINEARITY is not detected by the test
## 
## age , deep , stra , surf , coefficient(s) are non-significant may be due to multicollinearity
## 
## R-square of y on all x: 0.2311 
## 
## * use method argument to check which regressors may be the reason of collinearity
## ===================================
pcor(numeric_df[,c(1:2,4:6)], method=c("pearson"))
## $estimate
##                   age     attitude        deep        stra       surf
## age       1.000000000 -0.004073822 -0.02583547  0.08280593 -0.1282109
## attitude -0.004073822  1.000000000  0.05566725  0.03205301 -0.1424926
## deep     -0.025835471  0.055667246  1.00000000  0.04761982 -0.3028222
## stra      0.082805932  0.032053011  0.04761982  1.00000000 -0.1195590
## surf     -0.128210938 -0.142492609 -0.30282221 -0.11955903  1.0000000
## 
## $p.value
##                age   attitude         deep      stra         surf
## age      0.0000000 0.95883864 7.433945e-01 0.2933205 1.028842e-01
## attitude 0.9588386 0.00000000 4.803189e-01 0.6846100 6.960177e-02
## deep     0.7433945 0.48031888 0.000000e+00 0.5460878 8.527154e-05
## stra     0.2933205 0.68460999 5.460878e-01 0.0000000 1.284762e-01
## surf     0.1028842 0.06960177 8.527154e-05 0.1284762 0.000000e+00
## 
## $statistic
##                  age    attitude       deep       stra      surf
## age       0.00000000 -0.05169143 -0.3279248  1.0543103 -1.640352
## attitude -0.05169143  0.00000000  0.7074351  0.4069162 -1.826668
## deep     -0.32792484  0.70743513  0.0000000  0.6049140 -4.031682
## stra      1.05431032  0.40691620  0.6049140  0.0000000 -1.527994
## surf     -1.64035239 -1.82666808 -4.0316824 -1.5279942  0.000000
## 
## $n
## [1] 166
## 
## $gp
## [1] 3
## 
## $method
## [1] "pearson"

Conclusions

Overall, the power of the fitted models to explain the variability in points is limited. Adjusted R2 is quite low for all explored models and the residual errors remain quite high. Based on the F-statistic p-value, and the observation that attitude was the only variable significantly associated with points, a simple regression model could be sufficient for roughly estimating a student's future exam achievement. The better a student's attitude, the higher their exam points are likely to be, which makes sense.
However, the olsrr stepwise function pinpointed a multiple regression model (attitude, stra and age) that explains the variation in points even better. A rough interpretation of this model (a worked prediction example follows the list):
1. Higher attitude -> higher points (positive association)
2. Higher strategic approach -> higher points (positive association)
3. Older age -> lower points (negative association)
4. The impact of these three variables on points is: attitude > stra > age
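As a worked example using the best_fit coefficients reported above, the predicted exam points for a hypothetical 25-year-old student with attitude 4 and stra 3 (the student values are illustrative only):

#Hypothetical student: attitude = 4, stra = 3, age = 25
new_student <- data.frame(attitude = 4, stra = 3, age = 25)
predict(best_fit, newdata = new_student)
#roughly 10.90 + 3.48*4 + 1.00*3 - 0.088*25, i.e. about 25.6 points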


Chapter 3: Logistic regression analysis to predict alcohol consumption

In this exercise I analyze preprocessed data.

Original data and preprocessing

The original data sets consist of students from Math and Portuguese language classes who answered several questions assessing their economic, family and activity status, together with variables related to studying and alcohol consumption. In data pre-processing, individual students were identified based on a combination of 13 variables, and only students present in both datasets were kept.

Alcohol use was evaluated numerically, separately for weekdays (Dalc) and weekends (Walc). To quantify overall consumption, the average of these two variables was stored in the column alc_use. To group students into high and low alcohol use, a threshold of 2 was applied and students above it were flagged in the column high_use, as sketched below.
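A minimal sketch of that step, assuming alc is the joined data frame that already contains the Dalc and Walc columns (the object name is an assumption, not the actual preprocessing script):

#Hedged sketch: 'alc' is assumed to be the joined data with Dalc and Walc
library(dplyr)
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2)
alc <- mutate(alc, high_use = alc_use > 2)
write.csv(alc, "data/create_alc.csv", row.names = FALSE)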

Data analysis

Read and explore the data

data <- read.csv('data/create_alc.csv', sep=',')
head(data)
##   school sex age address famsize Pstatus Medu Fedu     Mjob     Fjob     reason
## 1     GP   F  18       U     GT3       A    4    4  at_home  teacher     course
## 2     GP   F  17       U     GT3       T    1    1  at_home    other     course
## 3     GP   F  15       U     LE3       T    1    1  at_home    other      other
## 4     GP   F  15       U     GT3       T    4    2   health services       home
## 5     GP   F  16       U     GT3       T    3    3    other    other       home
## 6     GP   M  16       U     LE3       T    4    3 services    other reputation
##   nursery internet guardian traveltime studytime failures schoolsup famsup paid
## 1     yes       no   mother          2         2        0       yes     no   no
## 2      no      yes   father          1         2        0        no    yes   no
## 3     yes      yes   mother          1         2        2       yes     no  yes
## 4     yes      yes   mother          1         3        0        no    yes  yes
## 5     yes       no   father          1         2        0        no    yes  yes
## 6     yes      yes   mother          1         2        0        no    yes  yes
##   activities higher romantic famrel freetime goout Dalc Walc health absences G1
## 1         no    yes       no      4        3     4    1    1      3        5  2
## 2         no    yes       no      5        3     3    1    1      3        3  7
## 3         no    yes       no      4        3     2    2    3      3        8 10
## 4        yes    yes      yes      3        2     2    1    1      5        1 14
## 5         no    yes       no      4        3     2    1    2      5        2  8
## 6        yes    yes       no      5        4     2    1    2      5        8 14
##   G2 G3 alc_use high_use
## 1  8  8     1.0    FALSE
## 2  8  8     1.0    FALSE
## 3 10 11     2.5     TRUE
## 4 14 14     1.0    FALSE
## 5 12 12     1.5    FALSE
## 6 14 14     1.5    FALSE
dim(data)
## [1] 382  35
str(data)
## 'data.frame':    382 obs. of  35 variables:
##  $ school    : Factor w/ 2 levels "GP","MS": 1 1 1 1 1 1 1 1 1 1 ...
##  $ sex       : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
##  $ age       : int  18 17 15 15 16 16 16 17 15 15 ...
##  $ address   : Factor w/ 2 levels "R","U": 2 2 2 2 2 2 2 2 2 2 ...
##  $ famsize   : Factor w/ 2 levels "GT3","LE3": 1 1 2 1 1 2 2 1 2 1 ...
##  $ Pstatus   : Factor w/ 2 levels "A","T": 1 2 2 2 2 2 2 1 1 2 ...
##  $ Medu      : int  4 1 1 4 3 4 2 4 3 3 ...
##  $ Fedu      : int  4 1 1 2 3 3 2 4 2 4 ...
##  $ Mjob      : Factor w/ 5 levels "at_home","health",..: 1 1 1 2 3 4 3 3 4 3 ...
##  $ Fjob      : Factor w/ 5 levels "at_home","health",..: 5 3 3 4 3 3 3 5 3 3 ...
##  $ reason    : Factor w/ 4 levels "course","home",..: 1 1 3 2 2 4 2 2 2 2 ...
##  $ nursery   : Factor w/ 2 levels "no","yes": 2 1 2 2 2 2 2 2 2 2 ...
##  $ internet  : Factor w/ 2 levels "no","yes": 1 2 2 2 1 2 2 1 2 2 ...
##  $ guardian  : Factor w/ 3 levels "father","mother",..: 2 1 2 2 1 2 2 2 2 2 ...
##  $ traveltime: int  2 1 1 1 1 1 1 2 1 1 ...
##  $ studytime : int  2 2 2 3 2 2 2 2 2 2 ...
##  $ failures  : int  0 0 2 0 0 0 0 0 0 0 ...
##  $ schoolsup : Factor w/ 2 levels "no","yes": 2 1 2 1 1 1 1 2 1 1 ...
##  $ famsup    : Factor w/ 2 levels "no","yes": 1 2 1 2 2 2 1 2 2 2 ...
##  $ paid      : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 1 1 2 2 ...
##  $ activities: Factor w/ 2 levels "no","yes": 1 1 1 2 1 2 1 1 1 2 ...
##  $ higher    : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
##  $ romantic  : Factor w/ 2 levels "no","yes": 1 1 1 2 1 1 1 1 1 1 ...
##  $ famrel    : int  4 5 4 3 4 5 4 4 4 5 ...
##  $ freetime  : int  3 3 3 2 3 4 4 1 2 5 ...
##  $ goout     : int  4 3 2 2 2 2 4 4 2 1 ...
##  $ Dalc      : int  1 1 2 1 1 1 1 1 1 1 ...
##  $ Walc      : int  1 1 3 1 2 2 1 1 1 1 ...
##  $ health    : int  3 3 3 5 5 5 3 1 1 5 ...
##  $ absences  : int  5 3 8 1 2 8 0 4 0 0 ...
##  $ G1        : int  2 7 10 14 8 14 12 8 16 13 ...
##  $ G2        : int  8 8 10 14 12 14 12 9 17 14 ...
##  $ G3        : int  8 8 11 14 12 14 12 10 18 14 ...
##  $ alc_use   : num  1 1 2.5 1 1.5 1.5 1 1 1 1 ...
##  $ high_use  : logi  FALSE FALSE TRUE FALSE FALSE FALSE ...

The dataset has 35 variables and 382 observations (students). It describes background information collected from students attending both the Math and the Portuguese language class.

Personal hypotheses prior to analysis

Four interesting variables that could predict high alcohol consumption:

  1. absences
    • High alcohol use could lead to more absences
  2. Pstatus (together, apart)
    • Problematic alcohol use could perhaps be more likely when the parents live apart
  3. sex
    • Male gender could predispose to high alcohol use
  4. goout
    • Going out could lead to higher alcohol use (whether problematic or not), because students' nights out often involve alcohol

interesting_variables <- c('absences', 'Pstatus', 'sex', 'goout', 'high_use')

Graphical data exploration & Summary statistics

library(dplyr)
library(tidyr)
library(ggplot2)
library(gridExtra)

interesting <- select(data, one_of(interesting_variables))
gather(interesting) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

sex_plot <- ggplot(interesting, aes(sex, fill = high_use))
p1 <- sex_plot + geom_bar(position = 'dodge') + ggtitle('Sex')

Pstatus_plot <- ggplot(interesting, aes(Pstatus, fill = high_use))
p2 <- Pstatus_plot + geom_bar(position = 'dodge') + ggtitle('Pstatus')

absences_plot <- ggplot(interesting, aes(x = high_use, y = absences, col = sex))
p3 <- absences_plot + geom_boxplot() + ylab("Number of absences") + ggtitle('Absences')

goout_plot <- ggplot(interesting, aes(x = high_use, y = goout, col = sex))
p4 <- goout_plot + geom_boxplot() + ylab("Going out") + ggtitle('Go out')

grid.arrange(p1, p2, p3, p4, nrow=2)

Findings:
1. Sex: a larger fraction of male students than female students show high alcohol consumption, so the initial hypothesis seems to hold.
2. Parents together/separated: most parents are together. From the plot it is hard to say whether a larger fraction of high-consumption students is found in separated families (the effect is not as dramatic as with sex).
3. Absences: the data are quite spread out, especially among female students. The trend of more absences with high alcohol use is clearer in males.
4. Go out: again a positive association between going out and high alcohol use, which again is clearer in male students.

My hypotheses are not completely useless at least.

Summary statistics by group

Below is a table showing the mean values of absences and goout for low and high alcohol use, separately for male and female students. Both variables have a higher mean in the high-use group than in the low-use group, and this holds for both sexes. However, the differences may not be statistically significant (based on how the plots above look). In any case, these variables may have some predictive power in a logistic model.

interesting %>% group_by(sex, high_use) %>% summarise(count = n(), mean_absences = mean(absences), mean_goout = mean(goout))
## # A tibble: 4 x 5
## # Groups:   sex [2]
##   sex   high_use count mean_absences mean_goout
##   <fct> <lgl>    <int>         <dbl>      <dbl>
## 1 F     FALSE      156          4.22       2.96
## 2 F     TRUE        42          6.79       3.36
## 3 M     FALSE      112          2.98       2.71
## 4 M     TRUE        72          6.12       3.93

Logistic regression fitting

model1 <- glm(high_use ~ sex + Pstatus + absences + goout, data = interesting, family = 'binomial')

summary(model1)
## 
## Call:
## glm(formula = high_use ~ sex + Pstatus + absences + goout, family = "binomial", 
##     data = interesting)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.7870  -0.8152  -0.5445   0.8351   2.4742  
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -4.160820   0.600994  -6.923 4.41e-12 ***
## sexM         0.958756   0.254654   3.765 0.000167 ***
## PstatusT    -0.002673   0.418684  -0.006 0.994905    
## absences     0.084166   0.022546   3.733 0.000189 ***
## goout        0.729844   0.119827   6.091 1.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 387.75  on 377  degrees of freedom
## AIC: 397.75
## 
## Number of Fisher Scoring iterations: 4
coef(model1)
##  (Intercept)         sexM     PstatusT     absences        goout 
## -4.160819757  0.958756213 -0.002673494  0.084165775  0.729844092
  1. Sex, male: positive effect on high_use
  2. Pstatus, together: non-significant effect on high_use
  3. Absences: positive effect on high_use
  4. Go out: positive effect on high_use

Deviance = a measure of goodness of fit; a higher value indicates a worse fit
- Null deviance = deviance when only the intercept is included in the model
- Residual deviance = deviance when all model variables are included -> the residual deviance is clearly lower than the null deviance

AIC = used to compare different models; lower is better
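As a quick check of how the AIC relates to the deviance here (for a binomial GLM with binary responses, AIC = residual deviance + 2 * number of estimated coefficients):

#Residual deviance plus 2 * 5 coefficients (incl. intercept) reproduces the AIC above
model1$deviance + 2 * length(coef(model1))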

Odds ratio and confidence intervals

OR <- coef(model1) %>% exp
CI <- confint(model1) %>% exp
## Waiting for profiling to be done...
cbind(OR, CI)
##                     OR       2.5 %     97.5 %
## (Intercept) 0.01559477 0.004551627 0.04836831
## sexM        2.60845010 1.592994654 4.33230471
## PstatusT    0.99733008 0.448239957 2.34168647
## absences    1.08780921 1.042121298 1.13966390
## goout       2.07475711 1.649860733 2.64195732

An odds ratio > 1 implies an increased likelihood that a student's alcohol consumption is high. If 1 is included in the CI, the interval spans from below 1 to above 1 and the variable has no clear predictive power for the dependent variable. Thus, from the OR and CI table it can be concluded that male sex and going out increase the likelihood that a person's alcohol consumption is high; for example, the goout coefficient corresponds to an odds ratio of about 2.07, so each one-point increase in goout roughly doubles the odds of high use. Absences seem to have a small effect, whereas Pstatus has no effect on alcohol consumption (also seen from the model summary, where this variable is the only non-significant one).

Analysis of predictive power

Since Pstatus was not a significant variable, I fit a new model (model2) without it and evaluate how well it predicts high_use in this same data set. First I calculate the probabilities of high_use with model2 and append them to the data frame interesting, which contains all my variables of interest. Additionally, I generate a column with the predicted high_use, which is TRUE if the probability is > 0.5 and FALSE otherwise.

model2 <- glm(high_use ~ sex + absences + goout, data = interesting, family = 'binomial')

probabilities <- predict(model2, type = 'response')

interesting <- mutate(interesting, probability = probabilities)

interesting <- mutate(interesting, predicted_high_use = probability > 0.5)

head(interesting)
##   absences Pstatus sex goout high_use probability predicted_high_use
## 1        5       A   F     4    FALSE  0.30512370              FALSE
## 2        3       T   F     3    FALSE  0.15171754              FALSE
## 3        8       T   F     2     TRUE  0.11608053              FALSE
## 4        1       T   F     2    FALSE  0.06790218              FALSE
## 5        2       T   F     2    FALSE  0.07342805              FALSE
## 6        8       T   M     2    FALSE  0.25514399              FALSE

To assess the model's predictive power, I check how often the predicted and true values match.

table(high_use = interesting$high_use, prediction = interesting$predicted_high_use)
##         prediction
## high_use FALSE TRUE
##    FALSE   253   15
##    TRUE     65   49

Visualizing

g <- ggplot(interesting, aes(x = probability, y = high_use, col = predicted_high_use))
g + geom_point()

# tabulate the target variable versus the predictions
table(high_use = interesting$high_use, prediction = interesting$predicted_high_use) %>% prop.table %>% addmargins
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.66230366 0.03926702 0.70157068
##    TRUE  0.17015707 0.12827225 0.29842932
##    Sum   0.83246073 0.16753927 1.00000000
sensitivity <- 0.1283 / 0.2984 * 100
specificity <- 0.6623 / 0.7016 * 100

Model sensitivity is about 43.0 % and specificity about 94.4 %, meaning that the model rarely predicts false positives but is more prone to missing true positives (it predicts more false negatives).
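The same figures can also be computed directly from the unrounded confusion table instead of the rounded proportions:

#Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP), from raw counts
conf <- table(high_use = interesting$high_use, prediction = interesting$predicted_high_use)
100 * conf["TRUE", "TRUE"] / sum(conf["TRUE", ])    #sensitivity
100 * conf["FALSE", "FALSE"] / sum(conf["FALSE", ]) #specificity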

Step-wise best model selection

Since the 4 variables of interest were chosen rather arbitrarily in the first place, I want to explore options for finding the best model more automatically. Here I use the step() function to test models with different variables included.

#select all except Dalc and Walc
variables_for_step <- dplyr::select(data, -Dalc, -Walc)
#select independent variables (all except alc_use and high_use, the two last columns)
names_variables <- colnames(variables_for_step)
variables_used <- names_variables[seq_len(length(names_variables)-2)]

#Constructing formula to use for step-wise model search
Formula <- formula(paste("high_use ~ ", paste(variables_used, collapse=" + ")))

Running the step() function with the constructed Formula. It takes some time and the output is very long, so it is hidden here.

model3 <- glm(Formula, data = variables_for_step, family = 'binomial')
step(model3, direction = "backward")

Below is the model with the lowest AIC found by step().

#Final best fit variables
model_best <- glm(high_use ~ sex + address + Fjob + traveltime + studytime + paid + activities + famrel + freetime + goout + absences, family = "binomial",
    data = variables_for_step)

Compare 3 logistic models

I have now fitted several models. Here I compare model_best, model1 and model2 with an analysis of deviance and collect their parameters in one table.

anova <- anova(model_best, model1, model2)
anova
## Analysis of Deviance Table
## 
## Model 1: high_use ~ sex + address + Fjob + traveltime + studytime + paid + 
##     activities + famrel + freetime + goout + absences
## Model 2: high_use ~ sex + Pstatus + absences + goout
## Model 3: high_use ~ sex + absences + goout
##   Resid. Df Resid. Dev  Df Deviance
## 1       367     348.24             
## 2       377     387.75 -10  -39.519
## 3       378     387.75  -1    0.000
library(stargazer)
table <- stargazer(model_best, model1, model2, title='Comparison of logistic regression models', type="html", align = TRUE)
Comparison of logistic regression models

                    Dependent variable: high_use
                    (1)           (2)           (3)
sexM                0.938***      0.959***      0.959***
                    (0.292)       (0.255)       (0.255)
addressU            -0.634*
                    (0.341)
Fjobhealth          0.827
                    (1.057)
Fjobother           0.694
                    (0.834)
Fjobservices        1.408*
                    (0.854)
Fjobteacher         -0.0004
                    (0.971)
traveltime          0.298
                    (0.203)
studytime           -0.392**
                    (0.183)
paidyes             0.610**
                    (0.278)
activitiesyes       -0.527*
                    (0.278)
famrel              -0.458***
                    (0.152)
freetime            0.214
                    (0.151)
PstatusT                          -0.003
                                  (0.419)
goout               0.766***      0.730***      0.730***
                    (0.135)       (0.120)       (0.120)
absences            0.082***      0.084***      0.084***
                    (0.023)       (0.023)       (0.022)
Constant            -3.200***     -4.161***     -4.163***
                    (1.216)       (0.601)       (0.475)
Observations        382           382           382
Log Likelihood      -174.118      -193.877      -193.877
Akaike Inf. Crit.   378.236       397.755       395.755
Note: *p<0.1; **p<0.05; ***p<0.01

Based on the residual deviances and AIC, model_best would be the best fit. However, it is quite complex, including 11 explanatory variables (15 estimated coefficients when the factor dummies and the intercept are counted), so I want to see how much better it actually performs.
Predicting high_use with 3 models:

probabilities1 <- predict(model1, type = 'response')
probabilities2 <- predict(model2, type = 'response')
probabilities_model_best <- predict(model_best, type = 'response') 

predictions_3_models <- mutate(variables_for_step, probability_model1 = probabilities1, probability_model2 = probabilities2, probability_best = probabilities_model_best)

predictions_3_models <- mutate(predictions_3_models, predicted_high_use_m1 = probability_model1 > 0.5, predicted_high_use_m2 = probability_model2 > 0.5, 
                               predicted_high_use_best = probability_best > 0.5)

2x2 tables for models

model1_table <- table(high_use = predictions_3_models$high_use, prediction_model1 = predictions_3_models$predicted_high_use_m1) %>% prop.table %>% addmargins
model2_table <- table(high_use = predictions_3_models$high_use, prediction_model2 = predictions_3_models$predicted_high_use_m2) %>% prop.table %>% addmargins
best_model_table <- table(high_use = predictions_3_models$high_use, prediction_best_model = predictions_3_models$predicted_high_use_best) %>% prop.table %>% addmargins

model1_table
##         prediction_model1
## high_use      FALSE       TRUE        Sum
##    FALSE 0.66230366 0.03926702 0.70157068
##    TRUE  0.17015707 0.12827225 0.29842932
##    Sum   0.83246073 0.16753927 1.00000000
model2_table
##         prediction_model2
## high_use      FALSE       TRUE        Sum
##    FALSE 0.66230366 0.03926702 0.70157068
##    TRUE  0.17015707 0.12827225 0.29842932
##    Sum   0.83246073 0.16753927 1.00000000
best_model_table
##         prediction_best_model
## high_use      FALSE       TRUE        Sum
##    FALSE 0.65445026 0.04712042 0.70157068
##    TRUE  0.14659686 0.15183246 0.29842932
##    Sum   0.80104712 0.19895288 1.00000000
# Calculate sensitivity and specificity for all models 
sensitivity_m1 <- 0.1283 / 0.2984 * 100
specificity_m1 <- 0.6623 / 0.7016 * 100


sensitivity_m2 <- 0.1283 / 0.2984 * 100
specificity_m2 <- 0.6623 / 0.7016 * 100

sensitivity_best <- 0.1518 / 0.2984 * 100
specificity_best <- 0.6545 / 0.7016 * 100

#Collect accuracy parameters in one df
model_accuracy <- data.frame("Model" = c('Model 1', 'Model 2', 'Best model'), "Sensitivity" = c(sensitivity_m1, sensitivity_m2, sensitivity_best), "Specificity" = c(specificity_m1, specificity_m2, specificity_best))

print(model_accuracy, digits = 3)
##        Model Sensitivity Specificity
## 1    Model 1        43.0        94.4
## 2    Model 2        43.0        94.4
## 3 Best model        50.9        93.3

Graphical visualization of models

#Nice examples for plot arrangements http://www.sthda.com/english/wiki/wiki.php?id_contents=7930
library(cowplot)
m1_plot <- ggplot(predictions_3_models, aes(x = probability_model1, y = high_use, col = predicted_high_use_m1))
m2_plot <- ggplot(predictions_3_models, aes(x = probability_model2, y = high_use, col = predicted_high_use_m2))
best_plot <- ggplot(predictions_3_models, aes(x = probability_best, y = high_use, col = predicted_high_use_best))

g1 <- m1_plot + geom_point() + ggtitle('Model 1')
g2 <- m2_plot + geom_point() + ggtitle('Model 2')
g3 <- best_plot + geom_point() + ggtitle('Best model')

plot_grid(g1, g2, g3, labels=c("A", "B", "C"), ncol = 1, nrow = 3)

Conclusions

Model selection is always a trade-off between specificity, sensitivity and model complexity. If a more complex model is allowed, I would select model_best because of its clearly improved sensitivity compared to the other models, even if there is a slight drop in specificity.


Chapter 4: Classification and clustering

In this exercise I analyze the Boston data set available in the MASS package.

Loading MASS and Boston data set

The Boston data set contains 506 observations (rows) of housing in the Boston area, with 14 different variables (columns).

library(MASS)
data('Boston')

str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Visualize and summarize data

Nice corrplot modification examples: https://cran.r-project.org/web/packages/corrplot/vignettes/corrplot-intro.html

library(corrplot)
library(RColorBrewer)


correlations <- cor(Boston)
round(correlations, digits = 2)
##          crim    zn indus  chas   nox    rm   age   dis   rad   tax ptratio
## crim     1.00 -0.20  0.41 -0.06  0.42 -0.22  0.35 -0.38  0.63  0.58    0.29
## zn      -0.20  1.00 -0.53 -0.04 -0.52  0.31 -0.57  0.66 -0.31 -0.31   -0.39
## indus    0.41 -0.53  1.00  0.06  0.76 -0.39  0.64 -0.71  0.60  0.72    0.38
## chas    -0.06 -0.04  0.06  1.00  0.09  0.09  0.09 -0.10 -0.01 -0.04   -0.12
## nox      0.42 -0.52  0.76  0.09  1.00 -0.30  0.73 -0.77  0.61  0.67    0.19
## rm      -0.22  0.31 -0.39  0.09 -0.30  1.00 -0.24  0.21 -0.21 -0.29   -0.36
## age      0.35 -0.57  0.64  0.09  0.73 -0.24  1.00 -0.75  0.46  0.51    0.26
## dis     -0.38  0.66 -0.71 -0.10 -0.77  0.21 -0.75  1.00 -0.49 -0.53   -0.23
## rad      0.63 -0.31  0.60 -0.01  0.61 -0.21  0.46 -0.49  1.00  0.91    0.46
## tax      0.58 -0.31  0.72 -0.04  0.67 -0.29  0.51 -0.53  0.91  1.00    0.46
## ptratio  0.29 -0.39  0.38 -0.12  0.19 -0.36  0.26 -0.23  0.46  0.46    1.00
## black   -0.39  0.18 -0.36  0.05 -0.38  0.13 -0.27  0.29 -0.44 -0.44   -0.18
## lstat    0.46 -0.41  0.60 -0.05  0.59 -0.61  0.60 -0.50  0.49  0.54    0.37
## medv    -0.39  0.36 -0.48  0.18 -0.43  0.70 -0.38  0.25 -0.38 -0.47   -0.51
##         black lstat  medv
## crim    -0.39  0.46 -0.39
## zn       0.18 -0.41  0.36
## indus   -0.36  0.60 -0.48
## chas     0.05 -0.05  0.18
## nox     -0.38  0.59 -0.43
## rm       0.13 -0.61  0.70
## age     -0.27  0.60 -0.38
## dis      0.29 -0.50  0.25
## rad     -0.44  0.49 -0.38
## tax     -0.44  0.54 -0.47
## ptratio -0.18  0.37 -0.51
## black    1.00 -0.37  0.33
## lstat   -0.37  1.00 -0.74
## medv     0.33 -0.74  1.00
colors <- brewer.pal(n = 9, name = "Pastel1")
signf_test <- cor.mtest(Boston, conf.level = .95)

corrplot(correlations, type = 'upper', method = 'ellipse', order = "hclust", col = brewer.pal(n = 8, name = "PiYG"), bg = colors[length(colors)], 
         p.mat = signf_test$p, insig = 'p-value', sig.level = .05, tl.col = "black", tl.srt = 90)

The corrplot above shows negative correlations in pink and positive correlations in green. The narrowness of each ellipse (method = 'ellipse') indicates how strong the correlation is. For non-significant correlations (p > 0.05), the p-values are shown instead.

From the graph it can be seen that most of the variables correlate significantly with each other; only a few pairs are not significantly correlated.

Scale variables and categorize crime rate

To classify the data accurately, the variables need to be standardized so that each has a mean of 0 and a standard deviation of 1; scale() does this by subtracting the column mean and dividing by the column standard deviation. It is done as follows (all variables must be numerical, as expected for this analysis):

scaled <- as.data.frame(scale(Boston))
class(scaled)
## [1] "data.frame"
str(scaled)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  -0.419 -0.417 -0.417 -0.416 -0.412 ...
##  $ zn     : num  0.285 -0.487 -0.487 -0.487 -0.487 ...
##  $ indus  : num  -1.287 -0.593 -0.593 -1.306 -1.306 ...
##  $ chas   : num  -0.272 -0.272 -0.272 -0.272 -0.272 ...
##  $ nox    : num  -0.144 -0.74 -0.74 -0.834 -0.834 ...
##  $ rm     : num  0.413 0.194 1.281 1.015 1.227 ...
##  $ age    : num  -0.12 0.367 -0.266 -0.809 -0.511 ...
##  $ dis    : num  0.14 0.557 0.557 1.077 1.077 ...
##  $ rad    : num  -0.982 -0.867 -0.867 -0.752 -0.752 ...
##  $ tax    : num  -0.666 -0.986 -0.986 -1.105 -1.105 ...
##  $ ptratio: num  -1.458 -0.303 -0.303 0.113 0.113 ...
##  $ black  : num  0.441 0.441 0.396 0.416 0.441 ...
##  $ lstat  : num  -1.074 -0.492 -1.208 -1.36 -1.025 ...
##  $ medv   : num  0.16 -0.101 1.323 1.182 1.486 ...
summary(scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
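As a quick sanity check of the standardization formula, each scaled column should equal (x - mean(x)) / sd(x); for example for crim:

#Verify the column-wise standardization for one variable
all.equal(scaled$crim, (Boston$crim - mean(Boston$crim)) / sd(Boston$crim))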

Next, crim is cut into a categorical variable according to its quantiles, so that it can later be used as the target when training a model to predict the crime rate class of an observation from the other variables.

bins = quantile(scaled$crim)

crime <- cut(scaled$crim, breaks = bins, label = c('low', 'med_low', 'med_high', 'high'), include.lowest = TRUE)

#count table for each category level
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
#replace original crim with categorical crime variable

scaled <- dplyr::select(scaled, -crim)
scaled <- data.frame(scaled, crime)
head(scaled)
##           zn      indus       chas        nox        rm        age      dis
## 1  0.2845483 -1.2866362 -0.2723291 -0.1440749 0.4132629 -0.1198948 0.140075
## 2 -0.4872402 -0.5927944 -0.2723291 -0.7395304 0.1940824  0.3668034 0.556609
## 3 -0.4872402 -0.5927944 -0.2723291 -0.7395304 1.2814456 -0.2655490 0.556609
## 4 -0.4872402 -1.3055857 -0.2723291 -0.8344581 1.0152978 -0.8090878 1.076671
## 5 -0.4872402 -1.3055857 -0.2723291 -0.8344581 1.2273620 -0.5106743 1.076671
## 6 -0.4872402 -1.3055857 -0.2723291 -0.8344581 0.2068916 -0.3508100 1.076671
##          rad        tax    ptratio     black      lstat       medv crime
## 1 -0.9818712 -0.6659492 -1.4575580 0.4406159 -1.0744990  0.1595278   low
## 2 -0.8670245 -0.9863534 -0.3027945 0.4406159 -0.4919525 -0.1014239   low
## 3 -0.8670245 -0.9863534 -0.3027945 0.3960351 -1.2075324  1.3229375   low
## 4 -0.7521778 -1.1050216  0.1129203 0.4157514 -1.3601708  1.1815886   low
## 5 -0.7521778 -1.1050216  0.1129203 0.4406159 -1.0254866  1.4860323   low
## 6 -0.7521778 -1.1050216  0.1129203 0.4101651 -1.0422909  0.6705582   low

Divide data for training and test sets

To evaluate how well the model predicts the crime rate, I set aside a small fraction of the data (20%) for testing, so that it is not used for training. Observations are assigned randomly to the training and test sets below.

random_test_rows <- sample(nrow(scaled), size = nrow(scaled) * 0.2)

test_set <- scaled[random_test_rows, ]
train_set <- scaled[-random_test_rows, ]

#Check that resulting dfs are as should
dim(test_set)
## [1] 101  14
dim(train_set)
## [1] 405  14

Fitting linear discriminant analysis for crimes

Fitting the classification model with the lda() function, using crime as the categorical target variable and all other (continuous) variables as predictors.

lda_fit <- lda(crime ~ ., data = train_set)
lda_fit
## Call:
## lda(crime ~ ., data = train_set)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2419753 0.2419753 0.2469136 0.2691358 
## 
## Group means:
##                   zn      indus        chas        nox         rm        age
## low       0.92464266 -0.8897982 -0.11163110 -0.8688892  0.4045201 -0.8910500
## med_low  -0.09981463 -0.3505417 -0.03128211 -0.6011984 -0.1161716 -0.4377022
## med_high -0.39119540  0.1474737  0.23949396  0.3493765  0.1096125  0.3846372
## high     -0.48724019  1.0169738 -0.05560795  1.0365437 -0.4662013  0.7985513
##                 dis        rad        tax     ptratio       black       lstat
## low       0.8878023 -0.6900668 -0.6908332 -0.41921351  0.37408109 -0.77216582
## med_low   0.4462575 -0.5459225 -0.5422557 -0.06571449  0.32076316 -0.16311272
## med_high -0.3564848 -0.4432402 -0.3497577 -0.31203261  0.07854186 -0.02461304
## high     -0.8438738  1.6395837  1.5150965  0.78247128 -0.84181805  0.91288320
##                 medv
## low       0.45187802
## med_low   0.01917749
## med_high  0.21563241
## high     -0.70881495
## 
## Coefficients of linear discriminants:
##                 LD1          LD2         LD3
## zn       0.10130194  0.741559208 -1.05278982
## indus    0.08220493 -0.259883138  0.41225872
## chas    -0.03956870 -0.032951051  0.06235607
## nox      0.40009907 -0.715980474 -1.11805034
## rm       0.01774001 -0.038318323 -0.15953910
## age      0.14696447 -0.335084938 -0.17233812
## dis     -0.10887013 -0.272311544  0.48362853
## rad      3.58923092  0.788480200  0.34997791
## tax      0.07917858  0.152278380 -0.02939385
## ptratio  0.15043010 -0.001825406 -0.23803248
## black   -0.06553807  0.031465646  0.09389286
## lstat    0.19614666 -0.289896254  0.42798884
## medv     0.08918889 -0.489129272 -0.13152735
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9610 0.0295 0.0095

Visualization of model

I use a ggsci color palette and the ggord package to visualize lda_fit. To install the ggord package from GitHub, I use the install_github function from the devtools package.

#Convert crime factor levels to numeric to plot in different colors
crime_levels <- as.numeric(train_set$crime)

#I hate the default colors of the plot, so I'm using ggsci package palettes instead
#Good source for color palettes https://www.datanovia.com/en/blog/top-r-color-palettes-to-know-for-great-data-visualization/
library(ggsci)
library(devtools)
install_github("fawda123/ggord")
library(ggord)

cols <- pal_aaas()(4)


#Plot with nicer colors

ggord(lda_fit, train_set$crime, poly = FALSE, arrow=.3, veclsz = .5, vec_ext = 4, size=1, cols = cols)

There are so many variables in the model that the arrows look a bit messy. However, it is still easy to see which variables affect the classification most (zn, rad, nox). The ggord package was very nice and easy to use. The model does not separate the crime rate classes perfectly, but I would say it does a pretty good job of distinguishing the high crime rate class from the others in train_set.

Test model

First I save the correct crime classes of the test data and then remove the crime variable from the test set, which is then used to predict the classes with lda_fit.

#Save correct classes to variable
correct_classes <- test_set$crime

#Remove classes from test data
test_set <- dplyr::select(test_set, -crime)

#Predict classes with model
lda_predict <- predict(lda_fit, newdata = test_set)

#Make 2X2 table to observe model accuracy

table(correct = correct_classes, predicted = lda_predict$class)
##           predicted
## correct    low med_low med_high high
##   low       19       7        3    0
##   med_low    6      10       12    0
##   med_high   0       6       17    3
##   high       0       0        0   18
#Calculate percentage of right predictions on test data
percent_correct <- 100 * mean(lda_predict$class==correct_classes)
percent_correct <- round(percent_correct, digits = 0)

percent_correct
## [1] 63

The analysis above shows that the model predicted the crime class correctly for 63 % of the test_set observations. Prediction accuracy is 100 % for the high crime rate class but lower for the other classes. The worst accuracy is for the med_low class, where more than half of the test observations were misclassified.
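The per-class accuracies discussed above can be read off the confusion table directly:

#Correct predictions per class divided by the number of test observations in that class
conf <- table(correct = correct_classes, predicted = lda_predict$class)
round(100 * diag(conf) / rowSums(conf))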

K-means clustering

I reload the Boston data set and scale it again for k-means clustering. First I calculate the Euclidean distances between observations:

data(Boston)
scaled_kmeans <- as.data.frame(scale(Boston))
eu_dist <- dist(scaled_kmeans)
summary(eu_dist)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970

Next I run k-means clustering using a fixed seed:

library(ggplot2)
set.seed(123)

#Setting maximum number of clusters
max_seeds <- 10

#Finding optimal number of clusters with so called elbow method
twcss <- sapply(1:max_seeds, function(k) {kmeans(scaled_kmeans, k)$tot.withinss})
qplot(x = 1:max_seeds, y = twcss, geom = 'line')

I decided to use 3 clusters: the TWCSS is still decreasing clearly at that point, and I was not satisfied with how 2 clusters or more than 3 clusters looked (I tested 2, 4 and 8 clusters).

set.seed(123)
km <- kmeans(scaled_kmeans, centers = 3)

cols <- pal_futurama()(3)
cols_clusters <- cols[km$cluster]
pairs(scaled_kmeans, col = cols_clusters)

Bonus: LDA using k-means clusters

Fitting lda() using the k-means clusters as the target variable and all variables in the data set as explanatory variables. The data used are the scaled Boston data.

lda_kmeans <- lda(km$cluster ~ ., data = scaled_kmeans)
lda_kmeans
## Call:
## lda(km$cluster ~ ., data = scaled_kmeans)
## 
## Prior probabilities of groups:
##         1         2         3 
## 0.2806324 0.3992095 0.3201581 
## 
## Group means:
##         crim         zn        indus        chas         nox         rm
## 1  0.9693718 -0.4872402  1.074440092 -0.02279455  1.04197430 -0.4146077
## 2 -0.3549295 -0.4039269  0.009294842  0.11748284  0.01531993 -0.2547135
## 3 -0.4071299  0.9307491 -0.953383032 -0.12651054 -0.93243813  0.6810272
##          age        dis        rad        tax     ptratio      black
## 1  0.7666895 -0.8346743  1.5010821  1.4852884  0.73584205 -0.7605477
## 2  0.3096462 -0.2267757 -0.5759279 -0.4964651 -0.09219308  0.2473725
## 3 -1.0581385  1.0143978 -0.5976310 -0.6828704 -0.53004055  0.3582008
##         lstat       medv
## 1  0.85963373 -0.6874933
## 2  0.09168925 -0.1052456
## 3 -0.86783467  0.7338497
## 
## Coefficients of linear discriminants:
##                 LD1         LD2
## crim     0.03654114  0.20373943
## zn      -0.08346821  0.34784463
## indus   -0.32262409 -0.12105014
## chas    -0.04761479 -0.13327215
## nox     -0.13026254  0.15610984
## rm       0.13267423  0.44058946
## age     -0.11936644 -0.84880847
## dis      0.23454618  0.58819732
## rad     -1.96894437  0.57933028
## tax     -1.10861600  0.53984421
## ptratio -0.13087741 -0.02004405
## black    0.15432491 -0.06106305
## lstat   -0.14002173  0.14786473
## medv     0.02559139  0.37307811
## 
## Proportion of trace:
##    LD1    LD2 
## 0.8999 0.1001

To visualize the fitted model, I again use the ggord function. For it to work, km$cluster needs to be converted with factor(), because it cannot be in numeric form for this function. I use the same color palette as before (pal_aaas).

cols <- pal_aaas()(3)

ggord(lda_kmeans, factor(km$cluster), poly = FALSE, arrow=.3, veclsz = .5, vec_ext = 4, size=1, cols = cols)

All 3 clusters can be separated quite nicely from each other, although only cluster 2 is clearly distinct from the two others. The separation is in any case better than when the crime rate classes were used as the target. In this model the most influential variables are tax, rad and age. However, every time k-means is executed without a fixed seed, the clusters formed can differ, which makes interpreting the meaning of the individual clusters quite difficult.
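The run-to-run variability can be illustrated by running k-means twice more without resetting the seed and cross-tabulating the cluster labels. A sketch, not part of the original analysis (note that even an identical partition can show up with permuted labels):

#Two extra k-means runs on the same scaled data; a non-diagonal table
#indicates that the partition (or at least its labelling) changed
km_run1 <- kmeans(scaled_kmeans, centers = 3)
km_run2 <- kmeans(scaled_kmeans, centers = 3)
table(run1 = km_run1$cluster, run2 = km_run2$cluster)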

Below is the LDA for k-means with 6 clusters. It shows crim and black as the most influential variables, but not all clusters separate nicely in the LD1-LD2 plane, which together explains only around 70 % of the between-cluster variation.

#6 clusters

set.seed(123)
km6 <- kmeans(scaled_kmeans, centers = 6)

lda_kmeans6 <- lda(km6$cluster ~ ., data = scaled_kmeans)

cols <- pal_aaas()(6)

ggord(lda_kmeans6, factor(km6$cluster), poly = FALSE, arrow=.3, veclsz = .5, vec_ext = 4, size=1, cols = cols)

Super-bonus

model_predictors <- dplyr::select(train_set, -crime)
# check the dimensions
dim(model_predictors)
## [1] 405  13
dim(lda_fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda_fit$scaling
matrix_product <- as.data.frame(matrix_product)

#Next, install and access the plotly package. Create a 3D plot (Cool!) of the columns of the matrix product by typing the code below.
library(plotly)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train_set$crime)

Chapter 5: Dimensionality reduction techniques

Import libraries, read and visualize data

library(ggplot2)
library(GGally)
library(corrplot)
library(dplyr)
library(RColorBrewer)

colors <- brewer.pal(n = 9, name = "Pastel1")

data <- read.csv('http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/human2.txt', sep=',')
dim(data)
## [1] 155   8
str(data)
## 'data.frame':    155 obs. of  8 variables:
##  $ Edu2.FM  : num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labo.FM  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Edu.Exp  : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ Life.Exp : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ GNI      : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ Mat.Mor  : int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ado.Birth: num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Parli.F  : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
summary(data)
##     Edu2.FM          Labo.FM          Edu.Exp         Life.Exp    
##  Min.   :0.1717   Min.   :0.1857   Min.   : 5.40   Min.   :49.00  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30  
##  Median :0.9375   Median :0.7535   Median :13.50   Median :74.20  
##  Mean   :0.8529   Mean   :0.7074   Mean   :13.18   Mean   :71.65  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25  
##  Max.   :1.4967   Max.   :1.0380   Max.   :20.20   Max.   :83.50  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50
ggpairs(data)

corr_matrix <- cor(data) 
corrplot(corr_matrix, type = 'upper', method = 'ellipse', order = "hclust", col = brewer.pal(n = 8, name = "PiYG"), bg = colors[length(colors)], 
         tl.col = "black", tl.srt = 90)

There seems to be quite a high correlation between GNI and most of the other variables, whereas Parli.F and Labo.FM correlate poorly with all of the other variables.
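This can be checked directly from the correlation matrix, for example by sorting the correlations of GNI and Parli.F with the other variables. A small sketch using the corr_matrix object created above:

#Correlations of GNI and Parli.F with the other variables, sorted
round(sort(corr_matrix[, "GNI"], decreasing = TRUE), 2)
round(sort(corr_matrix[, "Parli.F"], decreasing = TRUE), 2)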

PCA - scaled vs non-scaled variables

PCA with non-standardized variables is not informative, because variables with very different scales have a disproportionate and biased impact on the principal components. It can be seen that the GNI variable alone would then explain practically all of the variance (see the quick check below). Thus, the analysis should focus on standardized data, where the variables have been transformed to the same scale (mean 0, standard deviation 1).
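As a quick check (not part of the original analysis), the variances of the raw variables show why GNI dominates the non-standardized PCA:

#Variance of each unscaled variable; GNI is orders of magnitude larger
round(apply(data, 2, var), 1)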

pca <- prcomp(data)
pca_scaled <- scale(data) %>% prcomp

summarypca <- summary(pca)
summarypca_scaled <- summary(pca_scaled)

procent <- round(1*summarypca$importance[2, ]*100, digits = 1)
procent_scaled <- round(1*summarypca_scaled$importance[2, ]*100, digits = 1)

labels <- paste0(names(procent), " (", procent, "%)")
labels_scaled <- paste0(names(procent_scaled), " (", procent_scaled, "%)")

layout(matrix(1:2, ncol=2))
biplot(pca, choices = 1:2, col = c('black', 'purple'), cex = c(0.8, 1), xlab = labels[1], ylab = labels[2], main = 'Non-standardized')
biplot(pca_scaled, choices = 1:2, col = c('black', 'purple'), cex = c(0.8, 1), xlab = labels_scaled[1], ylab = labels_scaled[2], main = 'Standardized variables')

summary(pca_scaled)
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6     PC7
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631 0.45900
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595 0.02634
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069 0.98702
##                            PC8
## Standard deviation     0.32224
## Proportion of Variance 0.01298
## Cumulative Proportion  1.00000
pca_scaled
## Standard deviations (1, .., p=8):
## [1] 2.0708380 1.1397204 0.8750485 0.7788630 0.6619563 0.5363061 0.4589994
## [8] 0.3222406
## 
## Rotation (n x k) = (8 x 8):
##                   PC1         PC2         PC3         PC4        PC5
## Edu2.FM   -0.35664370  0.03796058 -0.24223089  0.62678110 -0.5983585
## Labo.FM    0.05457785  0.72432726 -0.58428770  0.06199424  0.2625067
## Edu.Exp   -0.42766720  0.13940571 -0.07340270 -0.07020294  0.1659678
## Life.Exp  -0.44372240 -0.02530473  0.10991305 -0.05834819  0.1628935
## GNI       -0.35048295  0.05060876 -0.20168779 -0.72727675 -0.4950306
## Mat.Mor    0.43697098  0.14508727 -0.12522539 -0.25170614 -0.1800657
## Ado.Birth  0.41126010  0.07708468  0.01968243  0.04986763 -0.4672068
## Parli.F   -0.08438558  0.65136866  0.72506309  0.01396293 -0.1523699
##                   PC6         PC7         PC8
## Edu2.FM    0.17713316  0.05773644  0.16459453
## Labo.FM   -0.03500707 -0.22729927 -0.07304568
## Edu.Exp   -0.38606919  0.77962966 -0.05415984
## Life.Exp  -0.42242796 -0.43406432  0.62737008
## GNI        0.11120305 -0.13711838 -0.16961173
## Mat.Mor    0.17370039  0.35380306  0.72193946
## Ado.Birth -0.76056557 -0.06897064 -0.14335186
## Parli.F    0.13749772  0.00568387 -0.02306476
screeplot(pca_scaled, type="lines", main = 'Scaled variables Screeplot')

It can be seen that almost 70 % of the variance is explained by the first 2 principal components. The screeplot() also shows that the data is best summarised with two components. As already evident from the correlation matrix, Parli.F and Labo.FM contribute mostly to PC2, while the other variables load mainly on PC1.
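The claim about Parli.F and Labo.FM can be verified from the rotation matrix. A small sketch sorting the absolute loadings on PC2, using the pca_scaled object from above:

#Absolute loadings on PC2, largest first
round(sort(abs(pca_scaled$rotation[, "PC2"]), decreasing = TRUE), 2)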

MCA

This part of the exercise is not fully finished because of a lack of time…

The `tea` dataset from the `FactoMineR` package consists of factor variables (all except age, which is an integer). I will select all variables except age for further analysis (the binned age_Q variable is kept).

library(FactoMineR)
library(tidyr)

data(tea)

str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
summary(tea)
##          breakfast           tea.time          evening          lunch    
##  breakfast    :144   Not.tea time:131   evening    :103   lunch    : 44  
##  Not.breakfast:156   tea time    :169   Not.evening:197   Not.lunch:256  
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##         dinner           always          home           work    
##  dinner    : 21   always    :103   home    :291   Not.work:213  
##  Not.dinner:279   Not.always:197   Not.home:  9   work    : 87  
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##         tearoom           friends          resto          pub     
##  Not.tearoom:242   friends    :196   Not.resto:221   Not.pub:237  
##  tearoom    : 58   Not.friends:104   resto    : 79   pub    : 63  
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##         Tea         How           sugar                     how     
##  black    : 74   alone:195   No.sugar:155   tea bag           :170  
##  Earl Grey:193   lemon: 33   sugar   :145   tea bag+unpackaged: 94  
##  green    : 33   milk : 63                  unpackaged        : 36  
##                  other:  9                                          
##                                                                     
##                                                                     
##                                                                     
##                   where                 price          age        sex    
##  chain store         :192   p_branded      : 95   Min.   :15.00   F:178  
##  chain store+tea shop: 78   p_cheap        :  7   1st Qu.:23.00   M:122  
##  tea shop            : 30   p_private label: 21   Median :32.00          
##                             p_unknown      : 12   Mean   :37.05          
##                             p_upscale      : 53   3rd Qu.:48.00          
##                             p_variable     :112   Max.   :90.00          
##                                                                          
##            SPC               Sport       age_Q          frequency  
##  employee    :59   Not.sportsman:121   15-24:92   1/day      : 95  
##  middle      :40   sportsman    :179   25-34:69   1 to 2/week: 44  
##  non-worker  :64                       35-44:40   +2/day     :127  
##  other worker:20                       45-59:61   3 to 6/week: 34  
##  senior      :35                       +60  :38                    
##  student     :70                                                   
##  workman     :12                                                   
##              escape.exoticism           spirituality        healthy   
##  escape-exoticism    :142     Not.spirituality:206   healthy    :210  
##  Not.escape-exoticism:158     spirituality    : 94   Not.healthy: 90  
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##          diuretic             friendliness            iron.absorption
##  diuretic    :174   friendliness    :242   iron absorption    : 31   
##  Not.diuretic:126   Not.friendliness: 58   Not.iron absorption:269   
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##          feminine             sophisticated        slimming          exciting  
##  feminine    :129   Not.sophisticated: 85   No.slimming:255   exciting   :116  
##  Not.feminine:171   sophisticated    :215   slimming   : 45   No.exciting:184  
##                                                                                
##                                                                                
##                                                                                
##                                                                                
##                                                                                
##         relaxing              effect.on.health
##  No.relaxing:113   effect on health   : 66    
##  relaxing   :187   No.effect on health:234    
##                                               
##                                               
##                                               
##                                               
## 
tea_data <- dplyr::select(tea, -age)
gather(tea_data) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped

mca <- MCA(tea_data, graph = FALSE)
summary(mca)
## 
## Call:
## MCA(X = tea_data, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7
## Variance               0.090   0.082   0.070   0.063   0.056   0.053   0.050
## % of var.              5.838   5.292   4.551   4.057   3.616   3.465   3.272
## Cumulative % of var.   5.838  11.130  15.681  19.738  23.354  26.819  30.091
##                        Dim.8   Dim.9  Dim.10  Dim.11  Dim.12  Dim.13  Dim.14
## Variance               0.048   0.047   0.044   0.041   0.040   0.039   0.037
## % of var.              3.090   3.053   2.834   2.643   2.623   2.531   2.388
## Cumulative % of var.  33.181  36.234  39.068  41.711  44.334  46.865  49.252
##                       Dim.15  Dim.16  Dim.17  Dim.18  Dim.19  Dim.20  Dim.21
## Variance               0.036   0.035   0.034   0.032   0.031   0.031   0.030
## % of var.              2.302   2.275   2.172   2.085   2.013   2.011   1.915
## Cumulative % of var.  51.554  53.829  56.000  58.086  60.099  62.110  64.025
##                       Dim.22  Dim.23  Dim.24  Dim.25  Dim.26  Dim.27  Dim.28
## Variance               0.028   0.027   0.026   0.025   0.025   0.024   0.024
## % of var.              1.847   1.740   1.686   1.638   1.609   1.571   1.524
## Cumulative % of var.  65.872  67.611  69.297  70.935  72.544  74.115  75.639
##                       Dim.29  Dim.30  Dim.31  Dim.32  Dim.33  Dim.34  Dim.35
## Variance               0.023   0.022   0.021   0.020   0.020   0.019   0.019
## % of var.              1.459   1.425   1.378   1.322   1.281   1.241   1.222
## Cumulative % of var.  77.099  78.523  79.901  81.223  82.504  83.745  84.967
##                       Dim.36  Dim.37  Dim.38  Dim.39  Dim.40  Dim.41  Dim.42
## Variance               0.018   0.017   0.017   0.016   0.015   0.015   0.014
## % of var.              1.152   1.092   1.072   1.019   0.993   0.950   0.924
## Cumulative % of var.  86.119  87.211  88.283  89.301  90.294  91.244  92.169
##                       Dim.43  Dim.44  Dim.45  Dim.46  Dim.47  Dim.48  Dim.49
## Variance               0.014   0.013   0.012   0.011   0.011   0.010   0.010
## % of var.              0.891   0.833   0.792   0.729   0.716   0.666   0.660
## Cumulative % of var.  93.060  93.893  94.684  95.414  96.130  96.796  97.456
##                       Dim.50  Dim.51  Dim.52  Dim.53  Dim.54
## Variance               0.009   0.009   0.008   0.007   0.006
## % of var.              0.605   0.584   0.519   0.447   0.390
## Cumulative % of var.  98.060  98.644  99.163  99.610 100.000
## 
## Individuals (the 10 first)
##                  Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3    ctr
## 1             | -0.580  1.246  0.174 |  0.155  0.098  0.012 |  0.052  0.013
## 2             | -0.376  0.522  0.108 |  0.293  0.350  0.066 | -0.164  0.127
## 3             |  0.083  0.026  0.004 | -0.155  0.099  0.015 |  0.122  0.071
## 4             | -0.569  1.196  0.236 | -0.273  0.304  0.054 | -0.019  0.002
## 5             | -0.145  0.078  0.020 | -0.142  0.083  0.019 |  0.002  0.000
## 6             | -0.676  1.693  0.272 | -0.284  0.330  0.048 | -0.021  0.002
## 7             | -0.191  0.135  0.027 |  0.020  0.002  0.000 |  0.141  0.095
## 8             | -0.043  0.007  0.001 |  0.108  0.047  0.009 | -0.089  0.038
## 9             | -0.027  0.003  0.000 |  0.267  0.291  0.049 |  0.341  0.553
## 10            |  0.205  0.155  0.028 |  0.366  0.546  0.089 |  0.281  0.374
##                 cos2  
## 1              0.001 |
## 2              0.021 |
## 3              0.009 |
## 4              0.000 |
## 5              0.000 |
## 6              0.000 |
## 7              0.015 |
## 8              0.006 |
## 9              0.080 |
## 10             0.052 |
## 
## Categories (the 10 first)
##                  Dim.1    ctr   cos2 v.test    Dim.2    ctr   cos2 v.test  
## breakfast     |  0.182  0.504  0.031  3.022 |  0.020  0.007  0.000  0.330 |
## Not.breakfast | -0.168  0.465  0.031 -3.022 | -0.018  0.006  0.000 -0.330 |
## Not.tea time  | -0.556  4.286  0.240 -8.468 |  0.004  0.000  0.000  0.065 |
## tea time      |  0.431  3.322  0.240  8.468 | -0.003  0.000  0.000 -0.065 |
## evening       |  0.276  0.830  0.040  3.452 | -0.409  2.006  0.087 -5.109 |
## Not.evening   | -0.144  0.434  0.040 -3.452 |  0.214  1.049  0.087  5.109 |
## lunch         |  0.601  1.678  0.062  4.306 | -0.408  0.854  0.029 -2.924 |
## Not.lunch     | -0.103  0.288  0.062 -4.306 |  0.070  0.147  0.029  2.924 |
## dinner        | -1.105  2.709  0.092 -5.240 | -0.081  0.016  0.000 -0.386 |
## Not.dinner    |  0.083  0.204  0.092  5.240 |  0.006  0.001  0.000  0.386 |
##                Dim.3    ctr   cos2 v.test  
## breakfast     -0.107  0.225  0.011 -1.784 |
## Not.breakfast  0.099  0.208  0.011  1.784 |
## Not.tea time   0.062  0.069  0.003  0.950 |
## tea time      -0.048  0.054  0.003 -0.950 |
## evening        0.344  1.653  0.062  4.301 |
## Not.evening   -0.180  0.864  0.062 -4.301 |
## lunch          0.240  0.343  0.010  1.719 |
## Not.lunch     -0.041  0.059  0.010 -1.719 |
## dinner         0.796  1.805  0.048  3.777 |
## Not.dinner    -0.060  0.136  0.048 -3.777 |
## 
## Categorical variables (eta2)
##                 Dim.1 Dim.2 Dim.3  
## breakfast     | 0.031 0.000 0.011 |
## tea.time      | 0.240 0.000 0.003 |
## evening       | 0.040 0.087 0.062 |
## lunch         | 0.062 0.029 0.010 |
## dinner        | 0.092 0.000 0.048 |
## always        | 0.056 0.035 0.007 |
## home          | 0.016 0.002 0.030 |
## work          | 0.075 0.020 0.022 |
## tearoom       | 0.321 0.019 0.031 |
## friends       | 0.186 0.061 0.030 |
plot(mca, invisible=c('ind'))


Chapter 6: Analysis of longitudinal data

In this chapter I analyze two longitudinal studies. The first, RATS, investigates weight differences between 3 distinct groups of rats over time. The second, BPRS, compares the effect of 2 different treatments on the psychiatric evaluation scores of subjects.

The data wrangling R script for this exercise can be found here:
https://github.com/neabister/IODS-project/blob/master/data/meet_and_repeat.R

Import libraries, read and visualize data

First the datasets are read from the wrangling output files, and the subject/ID and treatment/Group variables are converted back to factors for the analysis.

library(ggplot2)
library(dplyr)

rats_data <- read.csv('data/RATSL.csv')
bprs_data <- read.csv('data/BPRSL.csv')

rats_data$ID <- factor(rats_data$ID)
rats_data$Group <- factor(rats_data$Group)
bprs_data$subject <- factor(bprs_data$subject)
bprs_data$treatment <- factor(bprs_data$treatment)

glimpse(rats_data)
## Observations: 176
## Variables: 4
## $ ID     <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1, 2,...
## $ Group  <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, 1, ...
## $ weight <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 555, ...
## $ day    <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, 8, ...
glimpse(bprs_data)
## Observations: 360
## Variables: 4
## $ treatment <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ subject   <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17...
## $ bprs      <int> 42, 58, 54, 55, 72, 48, 71, 30, 41, 57, 30, 55, 36, 38, 6...
## $ week      <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...

RATS - Graphical display and Summary Measure Analysis

Lineplot of the longitudinal values separated for all 3 groups

ggplot(rats_data, aes(x = day, y = weight, linetype = ID)) + geom_line() + scale_linetype_manual(values = rep(1:10, times=4)) + facet_grid(. ~ Group, labeller = label_both) + theme(legend.position = "none") + scale_y_continuous(limits = c(min(rats_data$weight), max(rats_data$weight)))

The graphs above show that every group has one rat that differs from the others, a possible outlier; this is clearest in Group 2. It can also be seen that the Group 1 weights are much lower than those in Groups 2 and 3 from start to end. Furthermore, the weights increase over time in all groups, but there could be differences in the rate of increase (slope)…

Same from standardized data

n <- rats_data$day %>% unique() %>% length() #number of time points, used in the standard error below

rats_data <- rats_data %>% group_by(day) %>% mutate(stdardized = (weight - mean(weight))/sd(weight)) %>% ungroup()

# Summary data with mean and standard error of the standardized weight by Group and day
rats_data_summary <- rats_data %>%
  group_by(Group, day) %>%
  summarise(mean = mean(stdardized), se = sd(stdardized)/sqrt(n) ) %>%
  ungroup()

glimpse(rats_data_summary)
## Observations: 33
## Variables: 4
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2...
## $ day   <int> 1, 8, 15, 22, 29, 36, 43, 44, 50, 57, 64, 1, 8, 15, 22, 29, 3...
## $ mean  <dbl> -0.9166792, -0.9213162, -0.9270291, -0.9235906, -0.9253853, -...
## $ se    <dbl> 0.03648416, 0.03188681, 0.02715453, 0.03226803, 0.02585809, 0...
ggplot(rats_data_summary, aes(x = day, y = mean, col=Group)) +
  geom_line() +
  geom_point(size=3) +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se), width=0.3) +
  scale_y_continuous(name = "mean(standardized weight) +/- se(standardized weight)")

Standardizing the weights within each time point visibly reduces the tracking phenomenon, i.e. rats that start heavier tending to stay heavier throughout.
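To see the tracking on the individual level, the same profile plot as above can be drawn from the standardized weights. A sketch, not included in the original analysis, reusing the stdardized column created in the previous chunk:

#Individual profiles of the standardized weights for each group
ggplot(rats_data, aes(x = day, y = stdardized, linetype = ID)) + geom_line() + scale_linetype_manual(values = rep(1:10, times=4)) + facet_grid(. ~ Group, labeller = label_both) + theme(legend.position = "none") + scale_y_continuous(name = "standardized weight")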

Grouped boxplot of the standardized weights

ggplot(rats_data, aes(x = day, y = stdardized, col = Group)) +
geom_boxplot() +  scale_y_continuous(limits = c(min(rats_data$stdardized), max(rats_data$stdardized)))

Boxplots side by side to evaluate outliers

p1 <- ggplot(rats_data, aes(x = factor(day), y = weight, col = Group))
p2 <- p1 + geom_boxplot(position = position_dodge(width = 0.9))
p3 <- p2 + theme_bw() + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())
p4 <- p3 + scale_x_discrete(name = "day")

p4

Outlier removal

From the previous boxplots it can be seen that all three groups have one outlier value present in almost all time points, if not in all. Next, I will try to remove these outliers.

#Filter each group to their own df

group_1 <- rats_data %>% filter(Group == 1)
group_2 <- rats_data %>% filter(Group == 2)
group_3 <- rats_data %>% filter(Group == 3)

n_1 <- group_1$ID %>% unique() %>% length()
n_2 <- group_2$ID %>% unique() %>% length()
n_3 <- group_3$ID %>% unique() %>% length()

n_1
## [1] 8
n_2
## [1] 4
n_3
## [1] 4
#Print summaries to see the lowest and highest values for each group and decide the outlier limits, trying the 1st or 3rd quartile as the lower/upper limit
summary(group_1)
##        ID     Group      weight           day          stdardized     
##  1      :11   1:88   Min.   :225.0   Min.   : 1.00   Min.   :-1.1618  
##  2      :11   2: 0   1st Qu.:255.0   1st Qu.:15.00   1st Qu.:-0.9593  
##  3      :11   3: 0   Median :267.0   Median :36.00   Median :-0.9104  
##  4      :11          Mean   :263.7   Mean   :33.55   Mean   :-0.9265  
##  5      :11          3rd Qu.:274.0   3rd Qu.:50.00   3rd Qu.:-0.8753  
##  6      :11          Max.   :284.0   Max.   :64.00   Max.   :-0.7229  
##  (Other):22
summary(group_2)
##        ID     Group      weight           day          stdardized    
##  9      :11   1: 0   Min.   :405.0   Min.   : 1.00   Min.   :0.3105  
##  10     :11   2:44   1st Qu.:444.5   1st Qu.:15.00   1st Qu.:0.4543  
##  11     :11   3: 0   Median :457.0   Median :36.00   Median :0.5405  
##  12     :11          Mean   :484.7   Mean   :33.55   Mean   :0.7680  
##  1      : 0          3rd Qu.:510.8   3rd Qu.:50.00   3rd Qu.:0.8698  
##  2      : 0          Max.   :628.0   Max.   :64.00   Max.   :1.6330  
##  (Other): 0
summary(group_3)
##        ID     Group      weight           day          stdardized    
##  13     :11   1: 0   Min.   :465.0   Min.   : 1.00   Min.   :0.7749  
##  14     :11   2: 0   1st Qu.:513.8   1st Qu.:15.00   1st Qu.:0.9954  
##  15     :11   3:44   Median :530.0   Median :36.00   Median :1.1461  
##  16     :11          Mean   :525.8   Mean   :33.55   Mean   :1.0850  
##  1      : 0          3rd Qu.:543.0   3rd Qu.:50.00   3rd Qu.:1.1893  
##  2      : 0          Max.   :569.0   Max.   :64.00   Max.   :1.3440  
##  (Other): 0
#Remove outliers by filtering
group_1 <- group_1 %>% filter(weight > 255)
group_2 <- group_2 %>% filter(weight < 511)
group_3 <- group_3 %>% filter(weight > 513)

n_1 <- group_1$ID %>% unique() %>% length()
n_2 <- group_2$ID %>% unique() %>% length()
n_3 <- group_3$ID %>% unique() %>% length()

n_1
## [1] 7
n_2
## [1] 3
n_3
## [1] 4
#Bind back together
no_outliers <- rbind(group_1, group_2, group_3)

#Check how the plot looks now
p1 <- ggplot(no_outliers, aes(x = factor(day), y = weight, col = Group))
p2 <- p1 + geom_boxplot(position = position_dodge(width = 0.9))
p3 <- p2 + theme_bw() + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())
p4 <- p3 + scale_x_discrete(name = "day")

p4

After trying to remove outliers with the method above, we get rid of the outlier in Group 1, but for the other two groups it does not work as intended: for Group 3 the 1st quartile limit was not enough to remove the outlier, and from Group 2 a whole rat was dropped with the 3rd quartile limit. Since the n in Groups 2 and 3 was already low (4) to begin with, I do not think outliers can be detected reliably enough to confidently remove any observations from these groups. So I will continue by removing only the one low-weight rat from Group 1, simply by filtering the original dataframe:

no_outliers_rats <- rats_data %>% filter(weight > 255)

p1 <- ggplot(no_outliers_rats, aes(x = factor(day), y = weight, col = Group))
p2 <- p1 + geom_boxplot(position = position_dodge(width = 0.9))
p3 <- p2 + theme_bw() + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())
p4 <- p3 + scale_x_discrete(name = "day")

p4

lm fitting and Anova - NOT FINISHED

In the example with the BPRS data, the baseline measurement was filtered out of the data and used as one of the covariates in the linear regression fit. In this RATS data I am not sure whether day 1 can be considered a baseline, but I am going to treat it as one anyway: if it is not included in the model as a separate covariate, the model will show a significant difference between the groups even though this may simply reflect the baseline differences (which I think is the case here).

#Filter out baseline
no_outliers_ratsS <- no_outliers_rats %>%
  filter(day > 1) %>%
  group_by(Group, ID) %>%
  summarise(mean=mean(weight)) %>%
  ungroup()
glimpse(no_outliers_ratsS)
## Observations: 15
## Variables: 3
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3
## $ ID    <fct> 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
## $ mean  <dbl> 265.8750, 266.0000, 270.2500, 272.6667, 276.2000, 274.6000, 2...
#Adding the baseline (day 1 weight) as a covariate was left unfinished;
#a sketch of how it could be done is shown after the anova output below.


no_outliers_ratsS <- no_outliers_rats %>% group_by(Group, ID) %>% summarise( mean=mean(weight) ) %>% ungroup()
glimpse(no_outliers_ratsS)
## Observations: 15
## Variables: 3
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3
## $ ID    <fct> 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
## $ mean  <dbl> 265.8750, 266.0000, 269.1111, 272.6667, 274.7273, 274.6364, 2...
fit <- lm(mean ~ Group, data = no_outliers_ratsS)
summary(fit)
## 
## Call:
## lm(formula = mean ~ Group, data = no_outliers_ratsS)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -43.886 -17.142  -1.161   6.239 105.750 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   270.27      14.05  19.242 2.19e-10 ***
## Group2        214.43      23.29   9.206 8.69e-07 ***
## Group3        255.52      23.29  10.970 1.31e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 37.16 on 12 degrees of freedom
## Multiple R-squared:  0.9267, Adjusted R-squared:  0.9145 
## F-statistic: 75.85 on 2 and 12 DF,  p-value: 1.552e-07
anova(fit)
## Analysis of Variance Table
## 
## Response: mean
##           Df Sum Sq Mean Sq F value    Pr(>F)    
## Group      2 209511  104756  75.851 1.552e-07 ***
## Residuals 12  16573    1381                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
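The baseline adjustment that was left unfinished above could be completed roughly as follows. This is only a sketch (not run as part of this analysis): it joins the day-1 weight to the per-rat summary data and adds it as a covariate, analogously to the BPRS example.

#Sketch: day 1 weight as a baseline covariate
#(strictly, the response mean should be computed from day > 1 only,
#as in the first summary above)
baseline_df <- rats_data %>% filter(day == 1) %>% dplyr::select(ID, baseline = weight)
ratsS_baseline <- dplyr::left_join(no_outliers_ratsS, baseline_df, by = "ID")
fit_baseline <- lm(mean ~ baseline + Group, data = ratsS_baseline)
anova(fit_baseline)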

BPRS data - Linear Mixed Effects Models for Normal Response Variables

First I simply plot the BPRS data, ignoring the dependencies caused by the repeated measurements, and fit a basic linear regression, in order to later demonstrate the difference compared to methods designed for repeated measures data.

p1 <- ggplot(bprs_data, aes(x = week, y = bprs, group = subject))
p2 <- p1 + geom_point(aes(color = treatment))
p3 <- p2 + scale_x_continuous(name = "Week")
p4 <- p3 + scale_y_continuous(name = "Bprs")
p5 <- p4 + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())
p5

Lineplot to show the individual subjects' bprs profiles over time

ggplot(bprs_data, aes(x = week, y = bprs, linetype = subject)) + geom_line(aes(color = treatment))

On initial observation there is no clear pattern separating the two treatments; the measurements overlap. There might be one outlier at the week 1 time point, since one value is much higher than any other value at any time point.

bprs_lm <- lm(bprs ~ week + treatment, data = bprs_data)
summary(bprs_lm)
## 
## Call:
## lm(formula = bprs ~ week + treatment, data = bprs_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -22.454  -8.965  -3.196   7.002  50.244 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  46.4539     1.3670  33.982   <2e-16 ***
## week         -2.2704     0.2524  -8.995   <2e-16 ***
## treatment2    0.5722     1.3034   0.439    0.661    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 12.37 on 357 degrees of freedom
## Multiple R-squared:  0.1851, Adjusted R-squared:  0.1806 
## F-statistic: 40.55 on 2 and 357 DF,  p-value: < 2.2e-16